Results 1 - 20 of 77
1.
Med Image Anal; 91: 102985, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37844472

ABSTRACT

This paper introduces the "SurgT: Surgical Tracking" challenge, organized in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2022). The challenge had two purposes: (1) to establish the first standardized benchmark for the research community to assess soft-tissue trackers; and (2) to encourage the development of unsupervised deep learning methods, given the lack of annotated data in surgery. A dataset of 157 stereo endoscopic videos from 20 clinical cases, along with stereo camera calibration parameters, was provided. Participants were assigned the task of developing algorithms to track the movement of soft tissues, represented by bounding boxes, in stereo endoscopic videos. At the end of the challenge, the developed methods were assessed on a previously hidden test subset, using benchmarking metrics purposely developed for this challenge to verify the efficacy of unsupervised deep learning algorithms in tracking soft tissue. The metric used for ranking the methods was the Expected Average Overlap (EAO) score, which measures the average overlap between a tracker's bounding boxes and the ground truth. First place went to the deep learning submission by ICVS-2Ai, with the highest EAO score of 0.617. This method employs ARFlow to estimate unsupervised dense optical flow from cropped images, using photometric and regularization losses. Second, Jmees, with an EAO of 0.583, uses deep learning for surgical tool segmentation on top of a non-deep-learning baseline method, CSRT; CSRT by itself scores a similar EAO of 0.563. The results from this challenge show that non-deep-learning methods are currently still competitive. The dataset and benchmarking tool created for this challenge have been made publicly available at https://surgt.grand-challenge.org/. This challenge is expected to contribute to the development of autonomous robotic surgery and other digital surgical technologies.
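As an illustration of the ranking metric, the sketch below computes the average bounding-box overlap (IoU) of a tracker against ground truth over one sequence. This is a minimal sketch of the core overlap computation only; the challenge's full EAO protocol (failure handling, sequence-length weighting) is not reproduced here.

```python
# Minimal average-overlap computation underlying an EAO-style score.
# Illustrative only: not the challenge's exact EAO protocol.

def iou(box_a, box_b):
    """Intersection-over-Union of two (x, y, w, h) bounding boxes."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax1 + aw, bx1 + bw), min(ay1 + ah, by1 + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def average_overlap(predicted, ground_truth):
    """Mean IoU of a tracker's boxes against ground truth over one sequence."""
    return sum(iou(p, g) for p, g in zip(predicted, ground_truth)) / len(ground_truth)

# Example: a tracker drifting slightly away from a static target.
gt   = [(10, 10, 50, 50)] * 3
pred = [(10, 10, 50, 50), (12, 11, 50, 50), (15, 14, 50, 50)]
print(f"Average overlap: {average_overlap(pred, gt):.3f}")
```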


Subjects
Robotic Surgical Procedures; Humans; Benchmarking; Algorithms; Endoscopy; Image Processing, Computer-Assisted/methods
2.
Article in English | MEDLINE | ID: mdl-38082961

ABSTRACT

Classification of electrocardiogram (ECG) signals plays an important role in the diagnosis of heart diseases. The ECG is a complex, non-linear signal and the first option for the preliminary identification of specific pathologies/conditions (e.g., arrhythmias). The scientific community has proposed a multitude of intelligent systems to automatically process the ECG signal through deep learning and machine learning techniques, with high performance and state-of-the-art results. However, most of these models are designed to analyze the ECG signal segment by segment. To diagnose a pathology in the ECG, it is not enough to analyze a single segment corresponding to one cardiac cycle; successive cardiac-cycle segments must be analyzed to identify a pathological pattern. In this paper, an intelligent method based on a 1D Convolutional Neural Network paired with a Multilayer Perceptron (CNN 1D+MLP) was evaluated to automatically diagnose a set of pathological conditions from the analysis of individual cardiac-cycle segments. In particular, we studied the robustness of this method in the analysis of several simultaneous ECG signal segments. Two ECG signal databases were selected: the MIT-BIH Arrhythmia Database (D1) and the European ST-T Database (D2). The data were processed into datasets with two, three, and five segments in a row to train and test the method. The method was evaluated in terms of classification metrics (precision, recall, F1-score, and accuracy) and through confusion matrices. Overall, the method demonstrated high robustness in the analysis of successive ECG signal segments, from which we conclude that it has the potential to detect anomalous patterns in the ECG signal. In the future, we will use this method to analyze real-time ECG signals acquired by a wearable device through a cloud system. Clinical Relevance - This study evaluates the potential of a deep learning method to classify one or several segments of the cardiac cycle and diagnose pathologies in ECG signals.
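For illustration, the following PyTorch sketch shows a 1D CNN feeding an MLP classifier of the kind described above. The layer sizes, kernel widths, and the per-cycle segment length of 360 samples are assumptions for the example, not the authors' exact configuration.

```python
# Illustrative CNN 1D+MLP sketch; sizes are assumptions, not the paper's.
import torch
import torch.nn as nn

class CNN1DMLP(nn.Module):
    def __init__(self, segment_len=360, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (segment_len // 4), 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, segment_len)
        return self.classifier(self.features(x))

# Concatenating successive cardiac-cycle segments (e.g., 3 in a row) simply
# lengthens the input window fed to the same architecture.
model = CNN1DMLP(segment_len=3 * 360)
logits = model(torch.randn(8, 1, 3 * 360))  # 8 ECG windows of 3 segments
print(logits.shape)                          # torch.Size([8, 5])
```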


Subjects
Deep Learning; Humans; Neural Networks, Computer; Arrhythmias, Cardiac/diagnosis; Electrocardiography/methods; Machine Learning
3.
Article in English | MEDLINE | ID: mdl-38082575

ABSTRACT

Breast cancer is the most prevalent type of cancer in women. Although mammography is the main imaging modality for diagnosis, robust lesion detection in mammography images is a challenging task due to the poor contrast of lesion boundaries and the widely diverse sizes and shapes of the lesions. Deep learning techniques have been explored to facilitate automatic diagnosis and have produced outstanding outcomes on different medical challenges. This study provides a benchmark for breast lesion detection in mammography images. Five state-of-the-art methods were evaluated on 1592 mammograms from a publicly available dataset (CBIS-DDSM) and compared on six metrics: i) mean Average Precision (mAP); ii) intersection over union; iii) precision; iv) recall; v) True Positive Rate (TPR); and vi) false positives per image. The CenterNet, YOLOv5, Faster R-CNN, EfficientDet, and RetinaNet architectures were trained with a combination of the L1 localization loss and the L2 localization loss. Although all evaluated networks reached mAP above 60%, two stood out: CenterNet with an Hourglass-104 backbone and YOLOv5, achieving mAP scores of 70.71% and 69.36% and TPR scores of 96.10% and 92.19%, respectively, outperforming the other state-of-the-art models. Clinical Relevance - This study demonstrates the effectiveness of deep learning algorithms for breast lesion detection in mammography, potentially improving the accuracy and efficiency of breast cancer diagnosis.
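As a hedged illustration of the training objective mentioned above, the sketch below combines L1 and L2 terms on box-regression offsets. The equal weighting is an assumption; each detector in the benchmark applies its own box parameterization on top of this idea.

```python
# Illustrative combined L1+L2 localization loss; the 0.5/0.5 weighting
# is an assumption, not the benchmark's exact configuration.
import torch
import torch.nn.functional as F

def combined_localization_loss(pred_boxes, target_boxes, w_l1=0.5, w_l2=0.5):
    """pred_boxes, target_boxes: (N, 4) tensors of box offsets."""
    l1 = F.l1_loss(pred_boxes, target_boxes)
    l2 = F.mse_loss(pred_boxes, target_boxes)
    return w_l1 * l1 + w_l2 * l2

pred = torch.tensor([[0.1, 0.2, 0.9, 0.8]])
gt   = torch.tensor([[0.0, 0.2, 1.0, 0.8]])
print(combined_localization_loss(pred, gt))
```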


Subjects
Breast Neoplasms; Deep Learning; Female; Humans; Mammography/methods; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Early Detection of Cancer; Algorithms
4.
Article in English | MEDLINE | ID: mdl-38082637

ABSTRACT

Medical image segmentation is a paramount task for several clinical applications, namely the diagnosis of pathologies, treatment planning, and aiding image-guided surgeries. With the development of deep learning, Convolutional Neural Networks (CNN) have become the state of the art for medical image segmentation. However, precise object boundary delineation remains an issue, since traditional CNNs can produce non-smooth segmentations with boundary discontinuities. In this work, a U-shaped CNN architecture is proposed to generate both a pixel-wise segmentation and a probabilistic contour map of the object to segment, in order to produce reliable segmentations at the object's boundaries. Moreover, since the segmentation and contour maps are inherently related, a dual consistency loss that relates the two outputs of the network is proposed, enforcing the network to consistently learn the segmentation and contour delineation tasks during training. The proposed method was applied and validated on a public dataset of cardiac 3D ultrasound images of the left ventricle. The results showed the good performance of the method and its applicability to the cardiac dataset, demonstrating its potential for clinical use in medical image segmentation. Clinical Relevance - The proposed network with the dual consistency loss scheme can improve the performance of state-of-the-art CNNs for medical image segmentation, proving its value for computer-aided diagnosis.
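A minimal sketch of one possible dual consistency loss is shown below: the boundary implied by the predicted segmentation (approximated with a spatial-gradient edge map) is encouraged to agree with the predicted contour probability map. The edge operator and the MSE penalty are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative dual consistency loss between a segmentation map and a
# contour map; the edge approximation is an assumption for the sketch.
import torch
import torch.nn.functional as F

def soft_edges(seg):
    """Approximate boundary map of a soft segmentation (B, 1, H, W)."""
    dy = torch.abs(seg[:, :, 1:, :] - seg[:, :, :-1, :])
    dx = torch.abs(seg[:, :, :, 1:] - seg[:, :, :, :-1])
    dy = F.pad(dy, (0, 0, 0, 1))  # pad back to (H, W)
    dx = F.pad(dx, (0, 1, 0, 0))
    return torch.clamp(dx + dy, 0.0, 1.0)

def dual_consistency_loss(seg_prob, contour_prob):
    """Penalize disagreement between the two network outputs."""
    return F.mse_loss(soft_edges(seg_prob), contour_prob)

seg = torch.rand(2, 1, 64, 64)  # predicted segmentation probabilities
con = torch.rand(2, 1, 64, 64)  # predicted contour probabilities
print(dual_consistency_loss(seg, con))
```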


Subjects
Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Heart; Heart Ventricles
5.
Article in English | MEDLINE | ID: mdl-38083151

ABSTRACT

Accurate classification of lesions as benign or malignant in breast ultrasound (BUS) images is a critical task that requires experienced radiologists and faces many challenges, such as poor image quality, artifacts, and high lesion variability. Automatic lesion classification may therefore aid professionals in breast cancer diagnosis. In this scope, computer-aided diagnosis systems have been proposed to assist in medical image interpretation, helping overcome intra- and inter-observer variability, and such systems using convolutional neural networks have recently demonstrated impressive results in medical image classification tasks. However, the lack of public benchmarks and a standardized evaluation method hampers the comparison of the networks' performance. This work presents a benchmark for lesion classification in BUS images comparing six state-of-the-art networks: GoogLeNet, InceptionV3, ResNet, DenseNet, MobileNetV2, and EfficientNet. For each network, five input data variations that include segmentation information were tested to compare their impact on the final performance. The methods were trained on a multi-center BUS dataset (BUSI and UDIAT) and evaluated using precision, sensitivity, F1-score, accuracy, and area under the curve (AUC). Overall, the input consisting of the lesion with a thin border of surrounding background provides the best performance. For this input, EfficientNet obtained the best results: an accuracy of 97.65% and an AUC of 96.30%. Clinical Relevance - This study showed the potential of deep neural networks for clinical use in breast lesion classification, also suggesting the best model choices.
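To make the best-performing input variant concrete, the sketch below crops a lesion from a BUS image with a thin border of surrounding background, using the segmentation mask's bounding box. The 10-pixel margin is an assumption for illustration.

```python
# Illustrative "lesion with thin background border" input variant.
import numpy as np

def crop_with_border(image, mask, margin=10):
    """Crop the lesion bounding box from `image`, expanded by `margin` px."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, image.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1]

image = np.random.rand(256, 256)
mask = np.zeros((256, 256), dtype=bool)
mask[100:150, 80:160] = True                 # toy lesion mask
print(crop_with_border(image, mask).shape)   # (70, 100) with margin=10
```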


Subjects
Breast Neoplasms; Deep Learning; Female; Humans; Breast/diagnostic imaging; Breast Neoplasms/diagnostic imaging; Neural Networks, Computer; Ultrasonography
6.
Article in English | MEDLINE | ID: mdl-38083227

ABSTRACT

The left atrial appendage (LAA) is the major source of thromboembolism in patients with non-valvular atrial fibrillation. Currently, LAA occlusion can be offered as a treatment for these patients, obstructing the LAA through a percutaneously delivered device. Nevertheless, correct device sizing is a complex task, requiring manual analysis of medical images, an approach that is sub-optimal, time-consuming, and highly variable between experts. Different solutions have been proposed to improve intervention planning, but no efficient solution is available for 2D ultrasound, which is the imaging modality most used for intervention planning and guidance. In this work, we studied the performance of recently proposed deep learning methods applied to LAA segmentation in 2D ultrasound. For that, a 2D ultrasound database was created. Then, the performance of different deep learning methods, namely Unet, UnetR, AttUnet, and TransAttUnet, was assessed. All networks were compared using seven metrics: i) Dice coefficient; ii) accuracy; iii) recall; iv) specificity; v) precision; vi) Hausdorff distance; and vii) average distance error. Overall, the results demonstrate the efficiency of AttUnet and TransAttUnet, with Dice scores of 88.62% and 89.28% and accuracies of 88.25% and 86.30%, respectively. The current results demonstrate the feasibility of deep learning methods for LAA segmentation in 2D ultrasound. Clinical Relevance - Our results prove the clinical potential of deep neural networks for LAA anatomical analysis.


Subjects
Atrial Appendage; Deep Learning; Humans; Atrial Appendage/diagnostic imaging; Echocardiography, Transesophageal/methods; Ultrasonography; Databases, Factual
7.
Article in English | MEDLINE | ID: mdl-38083246

ABSTRACT

Ultrasound (US) imaging is a widely used medical imaging modality for the diagnosis, monitoring, and surgical planning of kidney conditions. Accurate segmentation of the kidney and its internal structures in US images is essential for the assessment of kidney function and the detection of pathological conditions such as cysts, tumors, and kidney stones, so automated methods for this task are needed. Over the years, automatic strategies have been proposed for this purpose, with deep learning methods achieving the current state-of-the-art results. However, these strategies typically ignore the segmentation of the kidney's internal structures. Moreover, they were evaluated on different private datasets, hampering the direct comparison of results and making it difficult to determine the optimal strategy for this task. In this study, we perform a comparative analysis of 7 deep learning networks for the segmentation of the kidney and its internal structures (Capsule, Central Echogenic Complex (CEC), Cortex, and Medulla) in 2D US images on an open-access multi-class kidney US dataset. The dataset includes 514 images, acquired in multiple clinical centers using different US machines and protocols, and contains annotations from two experts; the 321 images with complete segmentation of all four classes were used. Overall, the DeepLabV3+ network outperformed the inter-rater variability, with a Dice score of 78.0% compared to 75.6% between raters. Specifically, DeepLabV3+ achieved mean Dice scores of 94.2% for the Capsule, 85.8% for the CEC, 62.4% for the Cortex, and 69.6% for the Medulla. These findings suggest the potential of deep learning-based methods to improve the accuracy of kidney segmentation in US images. Clinical Relevance - This study shows the potential of deep learning to improve the accuracy of kidney segmentation in US, leading to increased diagnostic efficiency and enabling new applications such as computer-aided diagnosis and treatment, ultimately resulting in improved patient outcomes and reduced healthcare costs.
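The per-class Dice scores reported above can be computed as in the short sketch below; the label convention (0 = background, 1-4 = kidney structures) is an assumption for illustration.

```python
# Per-class Dice coefficient for integer label maps.
import numpy as np

def dice_per_class(pred, target, labels):
    """Dice score of `pred` vs `target` for each requested label."""
    scores = {}
    for c in labels:
        p, t = (pred == c), (target == c)
        denom = p.sum() + t.sum()
        scores[c] = 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
    return scores

pred = np.random.randint(0, 5, (128, 128))
target = np.random.randint(0, 5, (128, 128))
print(dice_per_class(pred, target, labels=[1, 2, 3, 4]))
```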


Subjects
Deep Learning; Humans; Diagnosis, Computer-Assisted/methods; Kidney/diagnostic imaging; Semantics; Datasets as Topic
8.
Article in English | MEDLINE | ID: mdl-38083333

ABSTRACT

Breast cancer is a global public health concern. For women with suspicious breast lesions, the current diagnosis requires a biopsy, which is usually guided by ultrasound (US). However, this process is challenging due to the low quality of the US image and the complexity of handling the US probe and the surgical needle simultaneously, making it largely reliant on the surgeon's expertise. Previous works employing collaborative robots have emerged to improve the precision of biopsy interventions, providing an easier, safer, and more ergonomic procedure. However, for this equipment to navigate around the breast autonomously, a 3D reconstruction of the breast must be available, and the accuracy of these systems still needs to improve, with the 3D reconstruction being one of the largest sources of error. The main objective of this work is to develop a method to obtain a robust 3D reconstruction of the patient's breast from monocular RGB images, which can later be used to compute the robot's trajectories for the biopsy. To this end, depth estimation techniques were developed, based on a deep learning architecture composed of a CNN, an LSTM, and an MLP, to generate depth maps that can be converted into point clouds. After merging several point clouds from multiple points of view, it is possible to generate a real-time reconstruction of the breast as a mesh. The development and validation of our method were performed using a previously described synthetic dataset. The procedure takes RGB images and the cameras' positions and outputs the breast's mesh, with a mean error of 3.9 mm and a standard deviation of 1.2 mm. The final results attest to the ability of this methodology to predict the breast's shape and size using monocular images. Clinical Relevance - This work proposes a method based on artificial intelligence and monocular RGB images to obtain the breast's volume during robotic-guided breast biopsies, improving their execution and safety.
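One step of this pipeline, back-projecting a predicted depth map into a point cloud with a pinhole camera model, can be sketched as below. The intrinsics and image size are illustrative assumptions; the actual method additionally merges clouds from multiple views and meshes the result.

```python
# Illustrative depth-map-to-point-cloud back-projection (pinhole model).
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """depth: (H, W) metric depths -> (H*W, 3) points in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((480, 640), 0.5)   # toy depth map: flat plane at 0.5 m
cloud = depth_to_pointcloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(cloud.shape)                  # (307200, 3)
```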


Subjects
Mammaplasty; Robotic Surgical Procedures; Robotics; Humans; Female; Artificial Intelligence; Breast/pathology
9.
Med Image Anal; 89: 102888, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37451133

ABSTRACT

Formalizing surgical activities as triplets of the instruments used, actions performed, and target anatomies is becoming a gold-standard approach for surgical activity modeling. The benefit is that this formalization yields a more detailed understanding of tool-tissue interaction, which can be used to develop better artificial intelligence assistance for image-guided surgery. Earlier efforts and the CholecTriplet challenge introduced in 2021 have put together techniques aimed at recognizing these triplets from surgical footage. Estimating the spatial locations of the triplets as well would offer more precise intraoperative context-aware decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly-supervised bounding box localization of every visible surgical instrument (or tool), as the key actors, and the modeling of each tool-activity in the form of a triplet. The paper describes a baseline method and 10 new deep learning algorithms presented at the challenge to solve the task. It also provides a thorough methodological comparison of the methods and an in-depth analysis of the obtained results across multiple metrics, along with the visual and procedural challenges, their significance, and useful insights for future research directions and applications in surgery.


Subjects
Artificial Intelligence; Surgery, Computer-Assisted; Humans; Endoscopy; Algorithms; Surgery, Computer-Assisted/methods; Surgical Instruments
10.
Heliyon; 9(6): e16297, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37346350

ABSTRACT

Background: Daily monitoring of physiological parameters is essential for tracking health status and preventing health problems. This is made possible by the democratization of numerous types of medical devices and promoted by their interconnection with smartphones. Nevertheless, medical devices that connect to smartphones are typically limited to the manufacturers' applications. Objectives: This paper proposes an intelligent scanning system that simplifies the collection of data displayed on the screens of different medical devices, recognizing the values and optionally integrating them, through open protocols, with centralized databases. Methods: To develop this system, a dataset comprising 1614 images of medical devices was created, obtained from manufacturer catalogs, photographs, and other public datasets. Then, three object detection algorithms (YOLOv3, Single-Shot Detector [SSD] 320 × 320, and SSD 640 × 640) were trained to detect the digits and acronyms/units of measurement presented by medical devices. These models were tested under three different conditions: detecting digits and acronyms/units as a single object class (one label), as two independent classes (two labels), and individually (fifteen labels). Models trained with one and two labels were complemented with a convolutional neural network (CNN) to identify the detected objects. To group the recognized digits, a condition-tree strategy based on density-based spatial clustering was used. Results: The most promising approach was the SSD 640 × 640 with fifteen labels. Conclusion: As future work, we intend to port this system to a mobile environment to accelerate and streamline the insertion of data into mobile health (mHealth) applications.
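The digit-grouping step can be illustrated as below: detected digit boxes are clustered by spatial proximity (DBSCAN on box centers) and each cluster is read left to right as one displayed value. The eps threshold is an assumption, and the paper's condition-tree rules on top of the clustering are not reproduced.

```python
# Illustrative grouping of detected digits into readings via DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

def group_digits(boxes, digits, eps=40.0):
    """boxes: list of (x, y, w, h); digits: list of digit strings."""
    centers = np.array([[x + w / 2, y + h / 2] for x, y, w, h in boxes])
    labels = DBSCAN(eps=eps, min_samples=1).fit_predict(centers)
    readings = []
    for cluster in np.unique(labels):
        idx = np.where(labels == cluster)[0]
        idx = idx[np.argsort(centers[idx, 0])]  # sort digits left to right
        readings.append("".join(digits[i] for i in idx))
    return readings

boxes  = [(10, 10, 20, 30), (35, 10, 20, 30), (10, 200, 20, 30)]
digits = ["1", "2", "8"]
print(group_digits(boxes, digits))  # ['12', '8'] - two separate readings
```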

11.
Med Image Anal; 88: 102833, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37267773

ABSTRACT

In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both the research and clinical context. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. Therefore, we organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 to encourage the development of automatic segmentation algorithms at an international level. The challenge utilized the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven different tissues (external cerebrospinal fluid, gray matter, white matter, ventricles, cerebellum, brainstem, deep gray matter). Twenty international teams participated in this challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability in network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks, and the main differences between the submissions were the fine-tuning done during training and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly; four of the top five teams used ensemble learning methods. However, one team's algorithm, built on an asymmetrical U-Net architecture, performed significantly better than the other submissions. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.
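Since most top teams relied on ensembling, the sketch below shows a generic majority vote over label maps from several independently trained models; it is an illustration, not any specific team's fusion rule.

```python
# Illustrative majority-vote ensembling of multi-class segmentations.
import numpy as np

def majority_vote(label_maps):
    """label_maps: list of integer segmentations from M models."""
    stacked = np.stack(label_maps)
    n_labels = stacked.max() + 1
    # Count votes per label, then pick the most voted label per voxel.
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_labels)])
    return votes.argmax(axis=0)

preds = [np.random.randint(0, 8, (64, 64)) for _ in range(5)]  # 7 tissues + bg
print(majority_vote(preds).shape)  # (64, 64)
```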


Subjects
Image Processing, Computer-Assisted; White Matter; Pregnancy; Female; Humans; Image Processing, Computer-Assisted/methods; Brain/diagnostic imaging; Head; Fetus/diagnostic imaging; Algorithms; Magnetic Resonance Imaging/methods
12.
Sensors (Basel); 23(9), 2023 Apr 30.
Article in English | MEDLINE | ID: mdl-37177630

ABSTRACT

Pectus carinatum (PC) is a chest deformity caused by disproportionate growth of the costal cartilages compared with the bony thoracic skeleton, pulling the sternum forwards and leading to its protrusion. Currently, the most common non-invasive treatment is external compressive bracing by means of an orthosis. While this treatment is widely adopted, the correct magnitude of the applied compressive forces remains unknown, leading to suboptimal results; moreover, current orthoses are not suitable for monitoring the treatment. The purpose of this study is to design a force measuring system that can be directly embedded into an existing PC orthosis without relevant modifications to its construction. For that, inspired by the commercially available products in which a solid silicone pad is used, three concepts for silicone-based sensors, two capacitive and one magnetic, are presented and compared. Additionally, a concept for a full pipeline to capture and store the sensor data was researched. Compression tests were conducted on a calibration machine, with forces ranging from 0 N to 300 N, and the sensors' local response in different regions was also evaluated. The three sensors were tested and compared against a solid silicone pad. One of the capacitive sensors presented a response identical to the solid silicone pad, while the other two either presented poor repeatability or were too stiff, raising concerns about patient comfort. Overall, the proposed system demonstrated its potential to measure and monitor the forces applied by the orthosis, corroborating its potential for clinical practice.
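A sensor's response on the calibration machine can be characterized as in the sketch below, which fits a polynomial mapping raw readings to applied force over the 0-300 N range. The synthetic quadratic response and the fit degree are assumptions; the paper compares measured responses rather than prescribing a model.

```python
# Illustrative calibration fit: raw sensor reading -> applied force.
import numpy as np

applied_force = np.linspace(0, 300, 31)                      # N, from the press
raw_reading = 0.002 * applied_force**2 + 1.5 * applied_force + 5.0
raw_reading += np.random.normal(0, 2.0, raw_reading.shape)   # toy sensor noise

# Invert the response: polynomial mapping reading -> force.
coeffs = np.polyfit(raw_reading, applied_force, deg=2)
estimate = np.polyval(coeffs, raw_reading)
rmse = np.sqrt(np.mean((estimate - applied_force) ** 2))
print(f"Calibration RMSE: {rmse:.2f} N")
```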


Subjects
Pectus Carinatum; Humans; Pectus Carinatum/therapy; Silicon; Sternum; Braces; Pressure; Treatment Outcome
13.
J Hazard Mater Adv; 10: 100315, 2023 May.
Article in English | MEDLINE | ID: mdl-37193121

ABSTRACT

The COVID-19 pandemic caused by the SARS-CoV-2 virus led to changes in lifestyle and human behaviour, which resulted in different consumption patterns of some classes of pharmaceuticals, including curative, symptom-relieving, and psychotropic drugs. The trends in the consumption of these compounds are related to their concentrations in wastewater systems, since incompletely metabolised drugs (or their metabolites back-transformed into the parental form) may be detected and quantified by analytical methods. Pharmaceuticals are highly recalcitrant compounds, and the conventional activated sludge processes implemented in wastewater treatment plants (WWTP) are ineffective at degrading these substances. As a result, these compounds end up in waterways or accumulate in the sludge, a serious concern given their potential effects on ecosystems and public health. Therefore, it is crucial to evaluate the presence of pharmaceuticals in water and sludge to assist in the search for more effective processes. In this work, eight pharmaceuticals from five therapeutic classes were analysed in wastewater and sludge samples collected at two WWTP located in Northern Portugal during the third COVID-19 epidemic wave in Portugal. The two WWTP demonstrated a similar pattern with respect to concentration levels in that period. However, the drug loads reaching each WWTP were dissimilar when normalising the concentrations to the inlet flow rate. Acetaminophen (ACET) was the compound detected at the highest concentrations in the aqueous samples of both WWTP (98.516 µg L⁻¹ in WWTP2 and 123.506 µg L⁻¹ in WWTP1), indicating that this drug, widely known as an antipyretic and analgesic agent to treat pain and fever, is extensively used without the need for a prescription. The concentrations determined in the sludge samples were below 1.65 µg g⁻¹ in both WWTP, with the highest value found for azithromycin (AZT). This result may be justified by the physico-chemical characteristics of the compound, which favour its adsorption onto the sludge surface through ionic interactions. It was not possible to establish a clear relationship between the incidence of COVID-19 cases in the sewer catchment and the concentration of drugs detected in the same period. However, the high incidence of COVID-19 in January 2021 is in line with the high concentration of drugs detected in the aqueous and sludge samples, although predicting drug load from viral load data was unfeasible.

14.
Med Image Anal; 86: 102803, 2023 May.
Article in English | MEDLINE | ID: mdl-37004378

ABSTRACT

Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps, or events, leaving out the fine-grained interaction details of the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as ⟨instrument, verb, target⟩ triplets delivers more comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and the assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from the competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the obtained results, performs a thorough methodological comparison and in-depth result analysis of the presented approaches, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.


Subjects
Benchmarking; Laparoscopy; Humans; Algorithms; Operating Rooms; Workflow; Deep Learning
15.
Sensors (Basel); 23(4), 2023 Feb 07.
Article in English | MEDLINE | ID: mdl-36850436

ABSTRACT

Breast cancer is the most prevalent cancer in the world and the fifth-leading cause of cancer-related death. Treatment is effective in the early stages; thus, screening considerable portions of the population is crucial. When the screening procedure uncovers a suspect lesion, a biopsy is performed to assess its potential for malignancy, usually under real-time ultrasound (US) imaging. This work proposes a visualization system for US breast biopsy, consisting of an application running on AR glasses that interacts with a computer application. The AR glasses track the position of QR codes mounted on a US probe and a biopsy needle, and US images are shown in the user's field of view with enhanced lesion visualization and the needle trajectory. To validate the system, the latency of the US image transmission was evaluated, and a usability assessment compared the proposed prototype with a traditional approach across different users. Needle alignment was more precise, at 92.67 ± 2.32° in our prototype versus 89.99 ± 37.49° in the traditional system, and users also reached the lesion more accurately. Overall, the proposed solution presents promising results, and the use of AR glasses as a tracking and visualization device exhibited good performance.


Subjects
Augmented Reality; Female; Humans; User-Computer Interface; Ultrasonography, Mammary; Ultrasonography; Biopsy
16.
Sci Rep; 13(1): 761, 2023 Jan 14.
Article in English | MEDLINE | ID: mdl-36641527

ABSTRACT

Chronic Venous Disorders (CVD) of the lower limbs are one of the most prevalent medical conditions, affecting 35% of adults in Europe and North America. Due to the exponential growth of the aging population and the worsening of CVD with age, the healthcare costs and resources needed for the treatment of CVD are expected to increase in the coming years. Early diagnosis of CVD is fundamental in treatment planning, while monitoring of the treatment is fundamental to assess a patient's condition and quantify the evolution of CVD. However, correct diagnosis relies on a qualitative approach through visual recognition of the various venous disorders, which is time-consuming and highly dependent on the physician's expertise. In this paper, we propose a novel automatic strategy for the joint segmentation and classification of CVDs. The strategy relies on a multi-task deep learning network, named VENet, that simultaneously solves the segmentation and classification tasks, exploiting the information of both to increase learning efficiency and ultimately improve their performance. The proposed method was compared against state-of-the-art strategies on a dataset of 1376 CVD images. Experiments showed that VENet achieved a classification performance of 96.4%, 96.4%, and 97.2% for accuracy, precision, and recall, respectively, and a segmentation performance of 75.4%, 76.7%, and 76.7% for the Dice coefficient, precision, and recall, respectively. The joint formulation increased the robustness of both tasks compared to conventional classification or segmentation strategies, proving its added value, mainly for the segmentation of small lesions.
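The joint formulation can be sketched as a single loss combining a classification term and a segmentation term, so both tasks are learned together. The Dice-plus-cross-entropy choice and the equal weighting below are illustrative assumptions, not VENet's exact objective.

```python
# Illustrative joint segmentation+classification loss for multi-task training.
import torch
import torch.nn.functional as F

def joint_loss(cls_logits, cls_target, seg_logits, seg_target, w=0.5):
    cls_term = F.cross_entropy(cls_logits, cls_target)
    seg_prob = torch.sigmoid(seg_logits)
    inter = (seg_prob * seg_target).sum()
    dice = 1 - (2 * inter + 1) / (seg_prob.sum() + seg_target.sum() + 1)
    return w * cls_term + (1 - w) * dice

cls_logits = torch.randn(4, 6)                  # e.g., 6 CVD classes
cls_target = torch.randint(0, 6, (4,))
seg_logits = torch.randn(4, 1, 64, 64)
seg_target = torch.randint(0, 2, (4, 1, 64, 64)).float()
print(joint_loss(cls_logits, cls_target, seg_logits, seg_target))
```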


Subjects
Cardiovascular Diseases; Neural Networks, Computer; Veins; Aged; Humans; Europe; Image Processing, Computer-Assisted/methods; North America; Chronic Disease
17.
Sensors (Basel); 22(19), 2022 Oct 02.
Article in English | MEDLINE | ID: mdl-36236577

ABSTRACT

The growth of the aging population brings numerous challenges to the health and aesthetic segments. Here, the use of laser therapy in dermatology is expected to increase, since it allows non-invasive and infection-free treatments. However, existing laser devices require doctors to manually handle the device and visually inspect the skin. As such, the treatment outcome depends on the user's expertise, which frequently results in ineffective treatments and side effects. This study aims to determine the workspace and limits of operation of laser treatments for vascular lesions of the lower limbs; its results can be used to develop robotic-guided technology to address the aforementioned problems. Specifically, the workspace and limits of operation were studied in eight vascular laser treatments. For this, an electromagnetic tracking system was used to collect the real-time position of the laser during the treatments. The computed average workspace length, height, and width were 0.84 ± 0.15, 0.41 ± 0.06, and 0.78 ± 0.16 m, respectively, corresponding to an average treatment volume of 0.277 ± 0.093 m³. The average treatment time was 23.2 ± 10.2 min, with an average laser orientation of 40.6 ± 5.6 degrees. Additionally, average velocities of 0.124 ± 0.103 m/s and 31.5 ± 25.4 deg/s were measured. This knowledge characterizes the vascular laser treatment workspace and limits of operation, which may ease the understanding for future robotic system development.
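The reported workspace dimensions can be derived from the tracker samples as in the minimal sketch below, which computes the axis-aligned extent and volume of the recorded laser positions; synthetic positions stand in for the real recordings.

```python
# Illustrative workspace extent/volume from electromagnetic-tracker samples.
import numpy as np

# Synthetic stand-in for recorded 3D laser positions (meters).
positions = np.random.uniform([0, 0, 0], [0.84, 0.41, 0.78], size=(5000, 3))

extent = positions.max(axis=0) - positions.min(axis=0)  # length, height, width
volume = np.prod(extent)                                 # m^3
print(f"Workspace extent (m): {extent.round(3)}, volume: {volume:.3f} m^3")
```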


Subjects
Robotics; Lower Extremity/surgery; Robotics/methods; Treatment Outcome
18.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 1016-1019, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36083940

ABSTRACT

Cephalometric analysis is an important and routine task in the medical field to assess craniofacial development and to diagnose cranial deformities and midline facial abnormalities. The advance of 3D digital techniques has driven the development of 3D cephalometry, which includes the localization of cephalometric landmarks in 3D models. However, manual labeling is still the norm, being a tedious, time-consuming task that is highly prone to intra-/inter-observer variability. In this paper, a framework to automatically locate cephalometric landmarks in 3D facial models is presented. The landmark detector is divided into two stages: (i) creation of 2D maps representative of the 3D model; and (ii) landmark detection through a regression convolutional neural network (CNN). In the first stage, the 3D facial model is transformed into 2D maps derived from 3D shape descriptors. In the second stage, a CNN estimates a probability map for each landmark using the 2D representations as input. The detection method was evaluated on three different datasets of 3D facial models, namely the Texas 3DFR, BU3DFE, and Bosphorus databases, yielding average distance errors of 2.3, 3.0, and 3.2 mm, respectively. These results demonstrate the accuracy of the method on different 3D facial datasets, with performance competitive with state-of-the-art methods, proving its versatility across different 3D models. Clinical Relevance - Overall, the performance of the landmark detector demonstrates its potential for 3D cephalometric analysis.
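The second stage's output can be read out as in the sketch below, which takes the peak of a predicted probability map as the landmark position. The map size and mm-per-pixel scale are assumptions for the example.

```python
# Illustrative landmark readout from a CNN-predicted probability map.
import numpy as np

def heatmap_peak(prob_map):
    """Return the (row, col) of the maximum of a 2D probability map."""
    return np.unravel_index(np.argmax(prob_map), prob_map.shape)

prob_map = np.zeros((128, 128))
prob_map[64, 40] = 1.0               # toy peak at the true landmark
row, col = heatmap_peak(prob_map)
mm_per_px = 0.5                      # assumed resolution of the 2D map
print(f"Landmark at pixel ({row}, {col}); "
      f"an offset of 4 px would be {4 * mm_per_px:.1f} mm error")
```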


Subjects
Anatomic Landmarks; Imaging, Three-Dimensional; Anatomic Landmarks/diagnostic imaging; Cephalometry/methods; Face/anatomy & histology; Face/diagnostic imaging; Humans; Imaging, Three-Dimensional/methods; Reproducibility of Results
19.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 3878-3881, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36085645

ABSTRACT

Automatic lesion segmentation in breast ultrasound (BUS) images aids in the diagnosis of breast cancer, the most common type of cancer in women. Accurate lesion segmentation in ultrasound images is a challenging task due to speckle noise, artifacts, shadows, and lesion variability in size and shape. Recently, convolutional neural networks have demonstrated impressive results in medical image segmentation tasks. However, the lack of public benchmarks and a standardized evaluation method hampers the comparison of the networks' performance. This work presents a benchmark of seven state-of-the-art methods for the automatic breast lesion segmentation task, evaluated on a multi-center BUS dataset composed of three public datasets. Specifically, the U-Net, Dynamic U-Net, Semantic Segmentation Deep Residual Network with Variational Autoencoder (SegResNetVAE), U-Net Transformers, Residual Feedback Network, Multiscale Dual Attention-Based Network, and Global Guidance Network (GG-Net) architectures were evaluated. Training was performed with a combination of the cross-entropy and Dice loss functions, and the overall performance of the networks was assessed using the Dice coefficient, Jaccard index, accuracy, recall, specificity, and precision. Although all networks obtained Dice scores above 75%, the GG-Net and SegResNetVAE architectures outperform the remaining methods, achieving 82.56% and 81.90%, respectively. Clinical Relevance - The results of this study demonstrate the potential of deep neural networks for clinical use in breast lesion segmentation, also suggesting the best model choices.


Subjects
Breast Neoplasms; Deep Learning; Artifacts; Breast Neoplasms/diagnostic imaging; Female; Humans; Ultrasonography; Ultrasonography, Mammary
20.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 865-868, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36085709

ABSTRACT

One in every eight women will develop breast cancer during her lifetime; therefore, early diagnosis of lesions is fundamental to improving the chances of recovery. Breast screening techniques such as mammography and ultrasound (US) imaging are often used to find breast cancers. When a lesion is found, a breast biopsy is performed to extract a tissue sample for analysis. The biopsy is usually US-guided to help find the lesion and guide the needle to its location. However, identifying the needle tip in the US image is challenging, possibly resulting in puncture failures. In this paper, we study the potential of a sensorized needle guide system that provides information about the needle angle and displacement with respect to the US probe. Laboratory tests were initially conducted to evaluate the accuracy of each sensor under controlled conditions. Afterwards, a practical proof-of-concept experiment with the US probe was performed. After calibration, the angle sensor showed a root mean square error (RMSE) of 0.48 degrees and the displacement sensor an RMSE of 0.26 mm. In the US probe tests, the displacement sensor showed higher errors, in the range of 1.19 mm to 2.05 mm, due to mechanical reasons. Overall, the proposed system showed its potential to accurately estimate the needle tip location throughout US-guided breast biopsies, corroborating its potential clinical application. Clinical Relevance - Potential for clinical application where precise needle localization in the ultrasound image is required.


Subjects
Breast Neoplasms; Ultrasonography, Mammary; Biopsy; Breast/diagnostic imaging; Breast Neoplasms/diagnostic imaging; Female; Humans; Mammography; Ultrasonography